http://www.johnhorgan.org/work16.htm
DISCOVER MAGAZINE COVER STORY, OCTOBER 2004
THE MYTH OF MIND CONTROL
Will anyone ever decode the human brain?
by John Horgan
All it took was a few jolts of electricity to turn
ordinary rats into roborats and for pundits to leap to the conclusion
that ordinary humans will soon be transformed into robohumans.
Scientists at the State University of New York Downstate Medical
Center in Brooklyn sparked a media frenzy two years ago when they
demonstrated that rats with electrodes implanted in their brains
could be steered like remote-controlled toy cars through an obstacle
course. Using a laptop equipped with a wireless transmitter, a
researcher stimulated cortical cells governing whisker sensations and
reinforced those signals by zapping the rats’ pleasure centers.
Presto! With this simple setup, the team had created living
robots.
Publications around the world promptly proclaimed the
imminence of those familiar science-fiction staples, surgically
implanted devices that electronically monitor and manipulate our
minds. The Economist warned that neurotechnology may be on the verge
of “overturning the essential nature of humanity,” and
New York Times columnist William Safire brooded that neural implants
might allow a “controlling organization” to hack into our
brains. In a more positive vein, MIT’s artificial-intelligence
maven Rodney Brooks predicted in Technology Review that by 2020
implants will let us carry out “thought-activated Google
searches.”
Hollywood’s remake of The Manchurian
Candidate raises the specter of a remote-controlled soldier turned
politician. In fact, officials at the Defense Advanced Research
Projects Agency, which funds the roborat team, have suggested that
cyborg soldiers could control weapons systems--or be controlled—via
brain chips. “Implanting electrodes into healthy people is not
something we’re going to do any time soon,” says Alan
Rudolph, the former head of the DARPA brain-machine research program.
“But 20 years ago, no one would have thought we’d put a
laser in the eye either. This agency leaves the door open to what’s
possible.”
Of course, that raises the question: Just how
realistic are these futuristic scenarios? To achieve truly precise
mind reading and control, neuroscientists must master the syntax or
set of rules that transform electrochemical pulses coursing through
the brain into perceptions, memories, emotions, and decisions.
Deciphering this so-called neural code—think of it as the
brain’s software--is the ultimate goal of many scientists
tinkering with brain-machine interfaces. “If you’re a
real neuroscientist, that’s the game you want to play,”
says John Chapin, a co-leader of the roborat research team.
Chapin
ranks the neural code right up there with two other great scientific
mysteries: the origin of the universe and of life on Earth. The
neural code is arguably the most consequential of the three. The
solution could, in principle, vastly expand our power to treat ailing
brains and to augment healthy ones. It could allow us to program
computers with human capabilities, helping them become more clever
than HAL in 2001: A Space Odyssey and C-3PO in Star Wars. The neural
code could also represent the key to the deepest of all philosophical
conundrums--the mind-body problem. We would finally understand how
this wrinkled lump of jelly in our skulls generates a unique,
conscious self with a sense of personal identity and autonomy.
In
addition to being the most significant mystery in science, the neural
code may also be the hardest to solve. Despite all they have learned
in the past century, neuroscientists have made little headway
figuring out exactly how brain cells process information. “It’s
a bit like saying after a hundred years of researching the body, ‘Do
you know if testes produce urine or sperm?’” says
neuroscientist V.S. Ramachandran of the University of California at
San Diego. “Our notions are still very primitive.”
The
neural code is often likened to the machine code that underpins the
operating system of a digital computer. Like transistors, neurons
serve as switches, or logic gates, absorbing and emitting
electrochemical pulses, called action potentials, which resemble the
basic units of information in digital computers. But the brain’s
complexity dwarfs that of any existing computer. A typical brain
contains 100 billion cells—almost as numerous as the stars in
the Milky Way galaxy. And each cell is linked via synapses to as many
as 100,000 others. The synapses between cells are awash in hormones
and neurotransmitters that modulate the transmission of signals, and
the synapses constantly form and dissolve, weaken and strengthen, in
response to new experiences.
Assuming that each synapse
processes one action potential per second and that these transactions
represent the brain’s computational output, then the brain
performs at least one quadrillion operations per second, almost a
thousand times more than the fastest supercomputers. Many more
computations may occur at scales below or above that of individual
synapses, says Steven Rose, a neurobiologist at England’s Open
University. “The brain may use every possible means of carrying
information.”
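For readers who want to check that arithmetic, the back-of-the-envelope calculation runs roughly as follows (a sketch only; the neuron and synapse counts are the round figures quoted above, not measurements, and the variable names are invented for illustration):

# Back-of-the-envelope estimate of the brain's computational throughput, using
# the round numbers quoted above. These are illustrative values, not measurements.

neurons = 100e9               # roughly 100 billion neurons
synapses_per_neuron = 1e4     # assume ~10,000 synapses per neuron (the text says "as many as 100,000")
ops_per_synapse_per_sec = 1   # one action potential handled per synapse per second, as assumed above

total_ops_per_sec = neurons * synapses_per_neuron * ops_per_synapse_per_sec
print(f"{total_ops_per_sec:.1e} operations per second")  # 1.0e+15, i.e. about one quadrillion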
Optimists recall that in the middle of
the last century, some biologists feared the genetic code was too
complex to crack. Then in 1953 Francis Crick and James Watson
unraveled the structure of DNA and researchers quickly established
that the double helix mediates an astonishingly simple genetic code
governing the heredity of all organisms. The neural code is not
likely to yield such an elegant, universal solution. The brain is “so
adaptive, so dynamic, changing so frequently from instant to
instant,” says Miguel Nicolelis, a neural-prosthesis researcher
at Duke University, that “it may not be proper to use the term
‘code’.”
Nicolelis has faith that science
will one day ferret out all of the brain’s
information-processing tricks—or at least enough of them to
yield huge improvements in neural prostheses for people who are
paralyzed, blind, or otherwise disabled. Yet he believes that certain
aspects of our minds may remain inviolable because our most
meaningful thoughts and memories are written in a code, or language,
that is unique to each of us. “There will always be some
mystery,” Nicolelis says.
If so, the bad news is that
brain chips will never be sophisticated enough for us to learn new
languages instantly or have a “mental telephone”
conversation with a friend “simply by thinking about talking,”
as Popular Science has prophesied. The good news is we are not on the
verge of what the Boston Globe called a “Matrix-like cyberpunk
dystopia” in which we all become robohumans, controlled by
implants that “impose false memories” and “scan for
wayward thoughts.”
All the loose speculation provoked
by roborats is ironic considering that the experiment is just a
small-scale replay of a major media event that is 40 years old. In
1964, Jose Delgado, a neuroscientist from Yale University, stood in a
Spanish bullring as a bull with a radio-equipped array of electrodes,
or “stimoceiver,” implanted in its brain charged toward
him. When Delgado pushed a button on a radio transmitter he was
holding, the bull stopped in its tracks. Delgado pushed another
button, and the bull obediently turned to the right and trotted away.
The New York Times hailed the event as “probably the most
spectacular demonstration ever performed of the deliberate
modification of animal behavior through external control of the
brain.”
Delgado also conducted stimoceiver experiments
in cats, monkeys, chimpanzees, and even human psychiatric patients.
He showed that he could jerk the limbs of patients like marionettes,
as well as induce sensations such as euphoria, sexual arousal,
sleepiness, garrulousness, terror, and rage. In his 1969 book Physical
Control of the Mind: Toward a Psychocivilized Society, Delgado
extolled the promise of brain-stimulation for curbing violent
aggression and other maladaptive traits.
Delgado’s
work—partly funded by the Pentagon—provoked fears of
government plots to transform citizens into robots. He dismissed this
“Orwellian possibility,” pointing out that the technology
was still much too unreliable and crude for precise mind control. The
major impediment to progress, he wrote, is that “our present
knowledge regarding the coding of information... is so elemental.”
Now 89 and living in San Diego, Delgado still follows research on
brain-machine interfaces. The potential of brain-stimulation “has
not been fully developed,” he says, because the neural code
remains “very difficult to untangle.”
In Delgado’s
heyday, neuroscientists believed that the brain employed just a
single, simple coding scheme discovered in the 1930’s by Lord
Edgar Adrian, a British neurobiologist. After isolating sensory
neurons from frogs and eels, Adrian showed that as the intensity of a
stimulus increases, so does a neuron’s firing rate, which can
peak as high as 200 spikes per second. In the next few decades,
experiments confirmed that the nervous systems of all animals employ
this method of conveying information, called a rate code. Researchers
also demonstrated that specific neurons are dedicated to extremely
specific tasks, such as seeing vertical lines, hearing sounds of a
specific pitch, or flexing a finger. Together, these findings
suggested that controlling the brain might be a simple matter of
delivering the right jolt of electricity to the right clusters of
brain cells.
It turns out that things are not so simple.
Recent research has undermined two basic assumptions about how the
brain processes information. One is the view of neurons as drones
single-mindedly carrying out specific tasks. Cells can be retrained
for different jobs, switching from facial expressions to finger
flexing, or from seeing red to hearing squeaks. Our neural circuits
keep shifting “massively and continuously” not only
during childhood but throughout our lives, says Michael Merzenich of
the University of California at San Francisco, whose research has
helped expose just how plastic neurons really are.
Neuroscientists
are also questioning whether the firing rate serves as a brain cell’s
sole means of expression. Rate codes are extremely inefficient. They
are analogous to a language that conveys information only through
modulations of a voice’s loudness, and they imply that the
brain is inherently noisy and wasteful. What counts as a genuine
signal is a surge in the firing rate of a cell from, say, 2 to 50
times a second; variations in the intervals between successive spikes
in a surge are considered irrelevant. But just as some geneticists
suspect that the junk DNA riddling our genomes actually serves hidden
functions, so some neuroscientists believe that information may lurk
within the fluctuating gaps between spikes. Schemes of this sort,
which are known as temporal codes, imply that significant information
may be conveyed by just a spike or two.
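The difference between the two schemes can be made concrete with a toy example. In the sketch below (invented spike times, purely illustrative), two spike trains have identical firing rates but very different timing; a rate code cannot tell them apart, while a temporal code can:

# Toy illustration: two spike trains with the SAME firing rate but DIFFERENT timing.
# A rate code sees no difference between them; a temporal code, which looks at the
# gaps between spikes, does. Spike times are in milliseconds over a 1-second window.

train_a = [100, 200, 300, 400, 500]          # evenly spaced spikes
train_b = [100, 120, 140, 480, 500]          # same number of spikes, bursty timing

def firing_rate(train, window_s=1.0):
    """Rate code: just count spikes per second."""
    return len(train) / window_s

def inter_spike_intervals(train):
    """Temporal code: the intervals between successive spikes carry the information."""
    return [b - a for a, b in zip(train, train[1:])]

print(firing_rate(train_a), firing_rate(train_b))   # 5.0 5.0 -> identical under a rate code
print(inter_spike_intervals(train_a))               # [100, 100, 100, 100]
print(inter_spike_intervals(train_b))               # [20, 20, 340, 20] -> very different timing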
Another
time-sensitive code involves groups of neurons firing in precise
lockstep, or synchrony. Some evidence suggests that synchrony helps
us focus our attention. If you are at a noisy cocktail party and
suddenly hear someone nearby talking about you, your ability to
eavesdrop on that conversation and ignore all the others around you
could result from the synchronous firing of cells. “Synchrony
is an effective way to boost the power of a signal and the impact it
has downstream on other neurons,” says Terry Sejnowski, a
computational neurobiologist at the Salk Institute. He speculates
that the abundant feedback loops linking neurons allow them to
synchronize their firing before passing messages on for further
processing.
Then there is the chaotic code championed by
Walter J. Freeman of the University of California at Berkeley. For
decades, he has contended that far too much emphasis has been placed
on individual neurons and action potentials, for reasons that are
less empirical than pedagogical. The action potential “organizes
data, it is easy to teach, and the data are so compelling in terms of
the immediacy of spikes on a screen.” But spikes are ultimately
just “errand boys,” Freeman says; they serve to convey
raw sensory information into the brain, but then much more subtle,
larger-scale processes immediately take over.
The most vital
components of cognition, Freeman believes, are the electrical and
magnetic fields, generated by synaptic currents, that constantly
ripple through the brain. These fields are chaotic, in the sense that
they conceal a hidden, complex order and are subject to minute
influences--the so-called butterfly effect. A sound enters the ear
and triggers a stream of action potentials, which nudge the waves of
electrical activity coursing through the cortex into a particular
chaotic pattern, or attractor. The result is fantastically precise,
almost instant comprehension. “You pick up the telephone and
hear a voice,” Freeman says, “and before you even know
the meaning of the words, you know who you’re talking to and
what her emotional state is.”
Although none of these
alternatives to rate codes has been proven yet, so little is known
about how the brain processes information that “it’s
difficult to rule out any coding scheme at this time,” argues
neuroscientist Christof Koch of Caltech. Koch and Itzhak Fried, who
is both a neuroscientist and practicing neurosurgeon at UCLA Medical
School, recently uncovered evidence for a coding scheme long ago
discarded as implausible. This scheme has been disparaged as the
“grandmother cell” hypothesis, because in its reductio ad
absurdum version it implies that our memory banks dedicate a single
neuron to each person, place, or thing that inhabits our thoughts,
such as Grandma. Most theorists assume that such a complex concept
must be underpinned by large populations of cells, each of which
corresponds to one component of the object (the bun, the bifocals,
the leather mini-skirt).
Yet Fried and Koch have found neurons
that act very much like grandmother cells. The subjects were
epileptics who had electrodes temporarily inserted into their brains
to provide information that could guide surgical treatment. The
researchers monitored the output of the electrodes while showing the
patients images of animals, people, and other things. A neuron in the
amygdala of one patient spiked only in response to three quite
different images of Bill Clinton: a line drawing, a presidential
portrait, and a group photograph. A cortical cell in the other
patient responded in a similar way to images of characters from The
Simpsons. In future experiments, Koch and Fried plan to show patients
photographs of their grandmothers to see if they can locate actual
grandmother cells.
It makes intuitive sense, Koch says, that
our brains should dedicate some cells to people or other things
frequently in our thoughts. He adds that his findings might seem less
surprising if one realizes that neurons are much more than simple
“threshold” switches that fire whenever incoming pulses
from other neurons exceed a certain level. A typical neuron receives
input from thousands of other cells, some of which inhibit rather
than encourage the neuron’s firing. The neuron may in turn
encourage or suppress firing by some of those same cells in complex
positive or negative feedback loops.
In other words, a single
neuron may resemble less a simple switch than a customized
minicomputer, sophisticated enough to distinguish your grandmother
from Grandma Moses. If this view is correct, meaningful messages
might be conveyed not just by hordes of neurons screaming in unison
but by small groups of cells whispering, perhaps in a terse temporal
code. Discerning such faint signals within the cacophony of the brain
will be “incredibly difficult,” Koch notes, no matter how
much neurotechnology advances.
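The "simple switch" picture that Koch is arguing against can be written down in a few lines (a deliberately crude sketch with invented weights; the point of the passage is that real neurons do far more than this):

# A "simple switch" caricature of a neuron: sum excitatory and inhibitory inputs
# and fire if the total crosses a threshold. Real neurons integrate inputs over
# time, space, and chemistry in ways no weighted sum captures, but even this toy
# version shows how inhibitory inputs complicate the picture.

def threshold_neuron(inputs, weights, threshold=1.0):
    """inputs: 0/1 spikes from other cells; weights: positive (excitatory)
    or negative (inhibitory) synaptic strengths."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three excitatory inputs and one strong inhibitory input:
weights = [0.5, 0.5, 0.5, -1.0]
print(threshold_neuron([1, 1, 1, 0], weights))  # 1 -- fires when excitation alone exceeds threshold
print(threshold_neuron([1, 1, 1, 1], weights))  # 0 -- the inhibitory cell silences it

Even in this caricature, a single strong inhibitory input can veto the cell's response, which hints at why feedback among cells makes the code so hard to read.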
Efforts to detect the whispers
amid the cacophony are further complicated by the improvisational
dexterity of the brain. Studies of the motor cortex, which underpins
body movement, have shown that the brain invents entirely new coding
schemes for novel situations. In the 1980’s, researchers
discovered neurons in a monkey’s motor cortex that peak in
their firing rate when the monkey moves its hand in a specific
direction. Rather than falling silent when the hand diverges even
slightly from its so-called preferred direction, the cells’
firing rate diminishes in proportion to the angle of
divergence.
Several teams, including one led by Andrew
Schwartz of the University of Pittsburgh, have sought to exploit
these findings to create neural prostheses for paralyzed patients.
They have demonstrated that electrodes implanted in a monkey’s
motor cortex can detect signals accompanying a specific arm movement;
these same signals—after being processed by an algorithm--can
initiate similar movements by a robot arm. If the monkey’s arm
is tied down, the monkey can learn to control the robot arm through pure
thought—but with an entirely different set of neural signals.
These findings dovetail with others showing that neurons’
coding behavior shifts in different contexts. “What you’re
aiming at is sort of a moving target,” Schwartz elaborates. “If
you make an estimate of something at one point in time, that doesn’t
mean it’s going to stay that way.”
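One widely used way to turn such directionally tuned cells into a movement command is the population-vector approach: each cell "votes" for its preferred direction in proportion to its firing rate. The sketch below uses invented, cosine-like tuning curves and illustrates the general idea only, not the specific algorithm used by Schwartz's team:

import math

# Population-vector sketch: each motor-cortex cell fires most for its "preferred"
# direction and less as the movement diverges from it (roughly cosine tuning).
# Summing each cell's preferred direction, weighted by its firing rate, recovers
# an estimate of the intended movement direction. All tuning values are invented.

preferred_dirs_deg = [0, 45, 90, 135, 180, 225, 270, 315]   # one cell per direction

def firing_rate(intended_deg, preferred_deg, baseline=10.0, gain=8.0):
    """Cosine-like tuning: peak rate at the preferred direction, lower elsewhere."""
    delta = math.radians(intended_deg - preferred_deg)
    return max(0.0, baseline + gain * math.cos(delta))

def decode(rates):
    """Weight each cell's preferred direction by its rate and take the resultant angle."""
    x = sum(r * math.cos(math.radians(p)) for r, p in zip(rates, preferred_dirs_deg))
    y = sum(r * math.sin(math.radians(p)) for r, p in zip(rates, preferred_dirs_deg))
    return math.degrees(math.atan2(y, x)) % 360

intended = 60.0                                              # the movement the monkey "wants"
rates = [firing_rate(intended, p) for p in preferred_dirs_deg]
print(round(decode(rates), 1))                               # ~60.0 -- the decoded direction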
The mutability
of the neural code is not necessarily bad news for neural-prosthesis
designers. In fact, the brain’s capacity for inventing new
information-processing schemes is thought to explain the success of
artificial cochleas, which have been implanted in more than 50,000
hearing-impaired people. Commercial versions typically employ a
half-dozen channels, each of which routes sounds of a different
pitch to electrodes that stimulate the auditory nerve. Like an old
telephone party line, the electrodes can stimulate not just a single
neuron but many simultaneously.
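In rough outline, that channel scheme can be sketched as follows: split the sound into a few frequency bands and let the energy in each band set the stimulation level of one electrode. The band edges, test signal, and six-channel count below are illustrative only, not any manufacturer's actual design:

import numpy as np

# Rough sketch of the cochlear-implant idea: split a sound into a few frequency
# bands ("channels"), measure the energy in each band, and let that energy set
# the stimulation strength on the corresponding electrode. The band edges and
# the six-channel count are illustrative, echoing the "half-dozen" in the text.

sample_rate = 16000
t = np.arange(0, 0.05, 1 / sample_rate)                  # 50 ms of audio
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 2200 * t)

band_edges_hz = [100, 300, 700, 1500, 3000, 5000, 8000]  # 6 channels, low to high pitch

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

for ch, (lo, hi) in enumerate(zip(band_edges_hz[:-1], band_edges_hz[1:]), start=1):
    energy = spectrum[(freqs >= lo) & (freqs < hi)].sum()
    print(f"channel {ch} ({lo}-{hi} Hz): relative energy {energy:.1f}")
# The channels containing 440 Hz and 2200 Hz report the most energy; each channel's
# value would be mapped to a current level on one electrode along the cochlea.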
When cochlear implants were
introduced in the mid-1980’s, many neuroscientists expected
them to work poorly, given their crude design. But they work well
enough for some deaf people to converse over the telephone,
particularly after a break-in period during which channel settings
are adjusted to provide the best reception. Patients’ brains
somehow figure out how to make the most out of the strange
signals.
There are surely limits to the brain’s ability
to make up for scientists’ ignorance, as the poor performance
of other neural prostheses suggests. Artificial retinas,
light-sensitive chips that mimic the eye’s signal-processing
ability and stimulate the optic nerve or visual cortex, have been
tested in a handful of blind subjects who usually “see”
nothing more than phosphenes, or flashes of light. And like
Schwartz’s monkeys, a few paralyzed humans have learned to
transmit commands to computers via chips embedded in their brains,
but the prostheses are still slow and unreliable.
Nevertheless,
the surprising effectiveness of artificial cochleas—together
with other evidence of the brain’s adaptability and
opportunism—has fueled much of the recent optimism over the
prospects for brain-machine interfaces. “This is very relevant
to why we think we’re going to be successful,” says Ted
Berger of the University of Southern California in Los Angeles, who
is leading a project to create implantable brain chips that can
restore or enhance memory. “We don’t need a perfectly
accurate model of a memory cell,” he remarks. “We
probably just have to be close, and the rest of the brain will adapt
around it.”
Thus far, Berger’s experiments have
been confined to slices of rat brain in petri dishes. For more than a
decade, he has embedded arrays of electrodes in slices of
hippocampus--which plays a role in learning and memory--and recorded
neurons’ responses to a wide range of electrical stimuli. His
observations have made him a firm believer in temporal codes;
hippocampal cells seem to be exquisitely sensitive not only to the
rate but also to the timing of incoming pulses. “The evidence
for temporal coding is indisputable,” Berger says. Within three
years, he hopes to have chips that mimic the signal-processing
properties of hippocampal tissue ready for testing in live
rats.
Berger boldly predicts that someday chips like his might
restore memory capacity to stroke victims or help soldiers instantly
learn complex fighting procedures, like the characters in The Matrix.
But in some respects Berger is quite modest. He acknowledges that his
memory chips could not be used to identify and manipulate specific
memories. His chips can simulate “how neurons in a particular
part of the brain change inputs into outputs. That’s very
different from saying that I can identify a memory of your
grandmother in a particular series of impulses.” To achieve
this sort of mind-reading, scientists must compile a “dictionary”
for translating specific neural patterns into specific memories,
perceptions, and thoughts. “I don’t know that it’s
not possible,” Berger says. “It’s certainly not
possible with what we know at the moment.”
“Don’t
count on it in the 21st century, or even in the 22nd,” says
Bruce McNaughton of the University of Arizona. With arrays of as many
as 50 electrodes, McNaughton has monitored neurons in the hippocampus
of rats as they run through a maze. Once a rat learns to navigate a
maze, its neurons discharge in the same patterns whenever it runs the
maze. Remarkably, when the rat sleeps after a hard day of maze
running, the same firing pattern often unfolds; the rat is presumably
dreaming of the maze. This pattern could be said to represent--at
least partially--the rat’s memory of the maze.
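One simple way to ask whether "the same firing pattern unfolds" during sleep is to compare the pattern of activity across cells in the two conditions, for example by correlating each cell's maze-running firing rate with its firing rate during sleep. The numbers below are invented, and real replay analyses compare the order and timing of spikes rather than overall rates; this is only a sketch of the logic:

import numpy as np

# Toy check for "replay": does the pattern of activity across cells during sleep
# resemble the pattern recorded while the rat ran the maze? Here we just correlate
# per-cell firing rates between the two conditions. All numbers are invented.

maze_rates  = np.array([12.0, 3.0, 8.0, 0.5, 15.0, 6.0])   # firing rates of 6 cells on the maze
sleep_rates = np.array([10.0, 2.5, 9.0, 1.0, 13.0, 5.0])   # same cells during post-maze sleep
rest_rates  = np.array([4.0, 11.0, 2.0, 9.0, 3.0, 12.0])   # a control pattern from unrelated rest

def similarity(a, b):
    """Pearson correlation between two activity patterns."""
    return float(np.corrcoef(a, b)[0, 1])

print(similarity(maze_rates, sleep_rates))   # high (about 0.98): sleep activity echoes the maze pattern
print(similarity(maze_rates, rest_rates))    # negative (about -0.7): no resemblance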
McNaughton
emphasizes that the same maze generates a different firing pattern in
different rats; even in the same rat, the pattern changes if the maze
is moved to a different room. He thus doubts whether science can
compile a dictionary for decoding the neural signals corresponding to
human memories, which are surely more complex, variable, and
context-sensitive than those of rats. At best, McNaughton suggests,
one might construct a dictionary for a single person by monitoring
the output of all her neurons for years while recording all her
behavior and her self-described thoughts. Even then, the dictionary
would be imperfect at best, and it would have to be constantly
revised to account for the individual’s ongoing experiences.
This dictionary would not work for anyone else.
Delgado hinted
at the problem more than 30 years ago in Physical Control of the Mind
when he raised the knotty question of meaning. With new and improved
stimoceivers and a better understanding of the neural code, he said,
scientists might determine what we are perceiving—a piece of
music, say—based on our neural output. But no conceivable
technology will be subtle enough to discern all the memories,
emotions, and meanings aroused in us by our perceptions, because
these emerge from “the experiential history of each
individual.” You hear a stale pop tune, I hear my wedding
song.
This is one point on which many neuroscientists agree:
The uniqueness of each individual represents a fundamental barrier to
science’s attempts to understand and control the mind. Although
all humans share a “universal mode of operation,” says
Freeman, even identical twins have divergent life histories and hence
unique memories, perceptions, predilections. The patterns of neural
activity underpinning our selves keep changing throughout our lives
as we learn to play checkers, read Thus Spoke Zarathustra, fall in
love, lose a job, win the lottery, get divorced, take
Prozac.
Freeman thinks the prospects are good for relatively
simple neural prostheses, such as devices that restore vision to the
blind or that let paralyzed people send simple commands to a
computer. But he suspects that our brains’ complexity and
diversity rule out more ambitious projects, such as mind-reading. If
artificial-intelligence engineers ever succeed in building a truly
intelligent machine based on a neural coding scheme similar to ours,
“we won’t be able to read its mind either,” Freeman
says. We and even our cyborg descendants will always be “beyond
Big Brother, and I’m very grateful for that.”
[BOX]
HARDWARE HURDLES
Last year, the engineering journal IEEE Spectrum
proclaimed that magnetic-resonance imaging and electroencephalography
are “bringing us closer to a world where mind reading is
possible.” In reality, detecting specific thoughts with these
external scanners is like trying to eavesdrop on individual
conversations at a baseball game while standing outside the stadium
and listening to the roar of the crowd.
Similarly, a technique
called transcranial magnetic stimulation, which excites specific
brain regions with electromagnetic fields, has been touted for its
potential to curb depression, heighten artistic ability, and
otherwise alter our minds. But this external stimulation method will
never be precise enough to, for instance, pinpoint and delete
specific memories, in spite of what films such as Total Recall and
Eternal Sunshine of the Spotless Mind have suggested. Most
neuroscientists believe that conversing with the brain in its own
language will require implanted electrodes.
In the past
decade, the technology of tiny electrodes that can touch individual
brain cells has leaped forward. Researchers have crafted arrays of
hundreds and even thousands of electrodes, each of which can monitor
a separate cell. However, implanting electrodes through holes in the
skull poses a risk of infection and brain damage, so testing on
healthy humans is not allowed. Electrodes also constantly lose
contact with neurons; at any one moment an array of 100 electrodes
may make contact with only half that many cells. “Getting a
stable connection,” says William Heetderks, a neural-prosthesis
expert at the National Institutes of Health, “is still a bit of
an issue.”
http://www.johnhorgan.org/work12.htm
Discover, June 2005
In the neurosurgery ward of the David
Geffen School of Medicine at UCLA, Danny, a stocky 21-year-old
college student wearing blue pajamas and sporting a wispy goatee,
sits on a bed watching one photo after another flash on a laptop
screen. Several macho movie stars appear in rapid succession,
including Arnold Schwarzenegger, Steven Seagal, Sylvester Stallone,
and Mr. T, the mohawked brawler who plays Stallone’s rival in
the boxing film Rocky III. At first glance, one might guess that
Danny has volunteered for a Hollywood survey: Who’s your
favorite action hero? In fact, Danny is the real hero. The black
cables emerging from the white turban wrapped around his skull hint
at his role in investigating a truly profound question: How do
thoughts form in the human brain?
Danny suffers from epilepsy,
and he has had electrodes temporarily implanted into his brain to
monitor seizures; ideally, the electrodes will pinpoint the neural
defect triggering his seizures so it can be surgically removed.
During the week or so that the electrodes remain in Danny’s
brain, he has volunteered to participate in experiments aimed at
understanding the underpinnings of cognition. Such research is quite
rare; for obvious ethical reasons, neuroscientists have few
opportunities to gather data from deep inside a living human
brain.
This particular experiment touches on one of
the most challenging puzzles of neuroscience: How do brain cells
recognize items as complicated as a toaster oven, the number nine, a
zebra, Bill Clinton, or the film character Rocky? Are single cells
like transistors in a computer, or pixels on a television screen,
contributing just minute pieces of information that only when
combined with the output of thousands or millions of other cells form
the complex pattern that means Rocky? Or can a single neuron learn to
recognize that face?
Most neuroscientists adhere to the pixel
view of neurons, arguing that individual cells can’t possibly
be clever enough to make sense of a concept as subtle as Rocky; after
all, the world’s fastest supercomputers have difficulty
performing that pattern-recognition feat. But Itzhak Fried, the
neurosurgeon who implanted the electrodes in Danny’s brain and
who leads this UCLA research program, believes he has found "thinking
cells" in the brains of subjects like Danny. If he’s
right, neuroscientists may be forced to overhaul their view of how
the human brain works.
A true thinking cell should be able to
recognize a person or fictional character even in many different
guises. Danny is a big fan of Hollywood action heroes, especially
Rocky; he owns DVDs of all four films in the series and never tires
of watching them. So, amid the images that flash on the laptop
screen, the research team has included shots that show Rocky running
through the streets of Philadelphia, staring longingly at his
girlfriend Adrian, or draped in the American flag after defeating his
Russian rival. Now and then, to test whether a cell’s
recognition cuts across sensory modes, Rocky or some other name is
spelled out on the laptop screen or uttered by an eerie synthesized
voice.
As Danny peers at the laptop, signals
stream from more than 100 ultra-thin electrodes, each sensitive
enough to detect the murmuring of a single cell, and
into the cables that emerge from his head. The cables ferry the
signals across the room to a cabinet crammed with amplifiers. A
computer atop the cabinet displays the readouts from Danny’s
cells as a series of multi-colored lines unfolding across a screen.
Every now and then, a line jerks upward, as one of Danny’s
cells sputters in response to an image or name. Rodrigo Quian
Quiroga, the Argentinian neuroscientist overseeing this research
session, points to one especially energetic squiggle and whispers,
"That’s Rocky."
The vast majority of
modern brain research involves technologies such as magnetic
resonance imaging, positron emission tomography, and
electroencephalography. All measure neural activity from outside the
skull. Figuring out how brains work with external scanners is
like studying life on a cloud-shrouded planet with satellites.
Implanted electrodes, by contrast, are like probes that drop down to
the planet’s surface. Electrode studies of monkeys and other
animals whose brains resemble ours have yielded valuable insights,
but these creatures cannot describe their subjective sensations.
A
handful of other hospitals are carrying out electrode research that
piggybacks on the clinical treatment of patients with epilepsy,
Parkinson’s disease, and other neurological disorders. But no
research program approaches UCLA’s in experience,
sophistication, or published results, says Christof Koch, a
neuroscientist at the California Institute of Technology who has been
collaborating with the UCLA group since 1998. "There is no one
technique that’s going to give you all the answers" to the
riddle of cognition, Koch says. "But this is one that’s
very, very good, and we’re getting better at it."
Fried,
the driven yet affable commander-in-chief of the program, founded it
in 1992 after leaving Yale. Since then more than 100 of his epileptic
patients with electrodes implanted in their brains for diagnostic
purposes have volunteered as subjects for basic research. From the
outset, Fried has been protective of his patients and their privacy;
this is the first time he has allowed a reporter to watch him and his
team at work.
Fried was born and raised in Israel, and he
spends several months a year working at a hospital in Tel Aviv as
well as at UCLA. He flew from Israel to Los Angeles on a Sunday, and
during a three-hour operation on Monday he drilled ten holes in
Danny’s skull and inserted the electrodes into his brain. The
following day, wearing a white lab coat over aqua scrubs, Fried
strode into a conference room packed with researchers who had
gathered to discuss plans for Danny. The team included two
undergraduates who flew here from the University of Pennsylvania, a
few graduate students from UCLA and Caltech, a couple of postdocs,
and two physicians.
Fried briskly provided background on the
patient: Danny is a bright, friendly young man, he said, who is
looking forward to working with the research team as a way to "break
the boredom" of his hospital stay. "Okay, let’s get
down to practical issues," he continued in his distinctive
Israeli accent. Rapid-fire, he queried the researchers on the status
of their "paradigms." He prefers that term to
"experiments," which might suggest electrodes had been
implanted in Danny’s brain primarily for research rather than
diagnostic purposes.
The discussion kept returning to
problems with data storage and analysis. Several researchers asked
for upgrades in equipment for storing data—which the
microelectrode experiments generate by the terabyte--and Fried said
he’d see what he could do. The researchers also received
detailed instructions on how to grapple with a major technical
challenge: electrodes in
patients’ brains often detect pulses from two or more nearby
neurons at the same time, which may show up in the
computer as one big signal. Quiroga has written a program that
mathematically unravels overlapping pulses. The process, called
cluster-cutting, makes it possible to extract more information from
the data, at least in principle. But some of Quiroga’s
colleagues were still trying to familiarize themselves with the fine
points of what the team has dubbed "Rodrigo’s code."
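The general recipe behind spike sorting of this kind can be outlined in a few steps: detect threshold crossings in the voltage trace, summarize each detected spike with a couple of waveform features, and group the spikes into clusters, one per putative neuron. The sketch below is a generic illustration of that recipe with synthetic data, not the UCLA team's actual program:

import numpy as np

# Generic spike-sorting sketch (not "Rodrigo's code"): 1) find spikes as threshold
# crossings in the voltage trace, 2) summarize each spike waveform with simple
# features, 3) cluster the features so each cluster corresponds to one putative
# neuron. Real pipelines use richer features and far better clustering.

def detect_spikes(trace, threshold):
    """Return sample indices where the trace first crosses the threshold."""
    above = trace > threshold
    return [i for i in range(1, len(trace)) if above[i] and not above[i - 1]]

def waveform_features(trace, index, width=8):
    """Describe a spike by its peak height and the summed height of its snippet."""
    snippet = trace[index:index + width]
    return (float(snippet.max()), float(snippet.sum()))

def cluster_two_groups(features):
    """Crude split on peak height: small-amplitude spikes vs large-amplitude spikes."""
    peaks = sorted(f[0] for f in features)
    cutoff = (peaks[0] + peaks[-1]) / 2
    return [0 if f[0] < cutoff else 1 for f in features]

# Synthetic trace: background noise plus two interleaved units of different amplitudes.
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.1, 2000)
for start, amp in [(100, 1.0), (500, 2.0), (900, 1.0), (1300, 2.0), (1700, 1.0)]:
    trace[start:start + 8] += amp * np.hanning(8)

spike_indices = detect_spikes(trace, threshold=0.5)
features = [waveform_features(trace, i) for i in spike_indices]
labels = cluster_two_groups(features)
print(len(spike_indices), "spikes detected; cluster labels:", labels)
# Expect the 5 inserted spikes, with the small- and large-amplitude units in separate clusters.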
The researchers had prepared more than enough studies to keep
Danny from becoming bored. One called for him to view
computer-generated pictures of celebrities morphing into each other:
Mr. T into Will Smith, and Sly into Arnie. The objective: to see
if a cell that lights up for Sly fires more slowly as the photo
gradually morphs into Arnie, or just abruptly falls silent. In
other words, are face-recognition cells like simple on-off switches,
or can they act like dimmers?
Another paradigm, called X-Cab,
is designed to yield insights into how we navigate. More than a
decade ago microelectrode studies of rats and monkeys revealed place
cells that light up when the animals move to a particular spot in a
maze. Previous versions of X-Cab, which involves driving a virtual
taxi through a cyber-city, have confirmed that humans have place
cells, too, as well as view cells
that respond to specific landmarks, and goal
cells that respond to the driver’s ultimate
destination.
Arne Ekstrom, a UCLA postdoc, and Indra
Viskontas, a graduate student, had made preparations for Danny to
test drive a new version of the X-Cab program, which allows the
driver to pick up and discharge passengers. Fried asked if they had
made the changes he requested in the paradigm. "Almost all of
them," Viskontas replied, adding that she and Ekstrom
"respectfully" disagreed with some of Fried’s
requests and wanted to discuss them with him.
Fried nodded.
"Any more questions?" he asked, scanning the room one last
time. "If not, to work."
Back in his office, Fried
recalled how he ended up overseeing this unusual program. One of his
role models was Wilder Penfield, the Canadian surgeon who carried out
pioneering operations on epileptics in the 1950’s and 1960’s.
After removing the skull-cap of patients, Penfield electrically
tickled different spots of their brains with wires and asked them
what they felt; because the brain lacks pain receptors, the patients
needed no anesthesia. They could report feeling a tingle in their
left forefinger, seeing a blue flash, hearing a low-pitched
hum.
This procedure not only helped to guide Penfield’s
surgical treatment of each patient; it also yielded clues to what
different parts of the brain do. "Here was somebody who was
really looking at the human mind," Fried said, "but at the same
time he was helping a human being." Fried’s method is much
more refined than Penfield’s. Fried
typically drills a dozen holes in the patient’s skull and
inserts a dozen hollow macroelectrodes, which can
detect large-scale electrical waves emanating from a
seizure.
Protruding from
the end of each macroelectrode are as many as ten flexible
microelectrodes that can detect the pulses of individual neurons.
The patient’s clinical status dictates the placement of the
macroelectrodes. In Danny’s case, tests suggest that his
seizures originate in his frontal lobes, so Fried inserted most of
the macroelectrodes in that region. He embedded one macroelectrode in
Danny’s hippocampus, a minute region that underpins memory and
is often implicated in epileptic seizures.
The patient’s
clinical health and comfort, Fried emphasized, take precedence over
research objectives. Even the most carefully planned paradigm must be
set aside if the patient becomes bored, tired, frustrated, gets a
headache, or just wants to be left alone. Fried carefully screens
prospective colleagues to ensure that they treat his patients like
human beings rather than laboratory animals. "The person who
will not do well," he said, "is a compulsive-obsessive
animal physiologist who, if he doesn’t control all the
variables, falls apart." But Fried also said he believes that
"there is a responsibility" to take advantage of these rare
chances to learn more about the behavior of individual neurons, which
he calls the building blocks of cognition.
Following
Penfield’s example, Fried occasionally does studies that
involve stimulating brain cells with minute electrical jolts. In
1998, he and three colleagues discovered that a female patient burst
into laughter every time they stimulated a spot at the top of her
brain called the supplementary motor area. Her hilarity was not just
physiological; the woman felt subjective sensations of "merriment
or mirth." She displaying a syndrome known as confabulation—she
invented reasons for her hilarity, telling the researchers at one
point, "You guys are just so funny... standing around."
But
most of Fried’s findings, which he has described in more than a
dozen papers in such leading journals as Nature, Neuron, and
Proceedings of the National Academy of Sciences, involve not
electrically stimulating neurons but passively listening to their
chatter as a patient performs various tasks. In one set of
experiments, Fried, Koch, and Gabriel Kreiman, a Caltech grad
student, found cells that light up both when a subject looks at an
image—of a baseball, say, or a woman’s face--and when he
closes his eyes and recalls the image in his mind’s eye. The
results provide the most convincing evidence yet that human
perception and imagination share neural circuitry.
The
experiments that have attracted the most attention are those
supporting the existence of "thinking cells." The
possibility of such cells has been debated at least since the 1950s,
when researchers found single neurons in the visual cortex of cats
and other animals that respond to simple stimuli, such as lines
oriented at a certain angle or moving in a specific direction or
light of a particular wavelength. Some theorists wondered whether
single neurons might also respond to much more complicated stimuli,
such as specific people.
Once known as gnostic cells,
after the Greek word for knowledge, they were dubbed grandmother
cells in the late 1960s by neuroscientist Jerry Lettvin of the
Massachusetts Institute of Technology. Lettvin
meant to make fun of—if not to dismiss--speculation that single
cells could be dedicated to recognition of family members or others
who loom large in a person’s mental universe. (For the history of the
term, see Charles Gross, "Genealogy of the Grandmother Cell," The
Neuroscientist 8: 512-518, 2002.) In one
paper, he joked that mother-smothered neurotics such as Portnoy, the
hero of Philip Roth’s novel Portnoy’s Complaint, could
be cured of their Oedipal disorders by having all the mother cells
purged from their brains.
Many neuroscientists found it hard
to believe that a single cell could recognize an inanimate object,
let alone a human being. Even objects as simple as chairs, trees, or
buildings come in an almost infinite variety of forms, and the same
object looks different from different perspectives or in different
contexts. Neuroscientists were therefore startled in the early 1970s
when experiments on monkeys by Charles Gross of Princeton turned up
cells that respond selectively to hands and faces--not specific faces
but faces in general.
No one had really followed up on
Gross’s findings, however, until the late 1990s, when
Fried and his colleagues started reporting how epileptic patients
reacted to various images. Some neurons were apparently smart enough
to comprehend the highly abstract concept "non-human animal."
These cells fired when the patient was shown pictures of a tiger,
eagle, antelope, and rabbit, but not when shown pictures of humans or
inanimate objects. Other cells favored images only of food, or of
buildings, or of human faces. Some cells responded to all faces, but
others were picky, firing for male faces but not female ones, or
scowling faces but not smiling ones—or, finally, faces of
specific individuals.
One of the first neurons of this type
was the
so-called Bill Clinton cell, which was buried deep
in the amygdala of a female patient. The cell responded to three very
different images of the former President: a line drawing of Clinton
laughing; a formal painting of him; and a photograph of him mingling
with other dignitaries. The cell remained mute when the patient
viewed images of other people, including male politicians and
celebrities. Fried’s group found cells in other volunteers that
responded in this same highly selective way to actors, including
Jennifer Aniston, Brad Pitt, and Halle Berry.
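The screening logic behind such findings can be illustrated simply: compare a cell's firing rate across all of the images shown and flag the cells whose response to one person or category dwarfs their response to everything else. The rates and threshold below are invented; the published analyses rely on repeated presentations and statistical tests rather than a single cutoff:

# Schematic of how one screens for highly selective "thinking cells": compare a
# cell's firing rate across the images shown and flag images whose response
# dwarfs the cell's typical response. All numbers are invented.

responses = {                          # spikes per second for one recorded cell
    "tiger": 1.0,
    "eagle": 0.8,
    "building": 1.2,
    "Bill Clinton (drawing)": 24.0,
    "Bill Clinton (portrait)": 21.0,
    "Bill Clinton (group photo)": 26.0,
}

def selective_images(responses, factor=5.0):
    """Flag images whose response dwarfs the median response to the other images."""
    flagged = []
    for img, rate in responses.items():
        others = sorted(r for other, r in responses.items() if other != img)
        typical = others[len(others) // 2]          # median response to the other images
        if rate > factor * typical:
            flagged.append(img)
    return flagged

print(selective_images(responses))   # flags the three Clinton images and nothing else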
One
reason celebrities have played a prominent role in Fried’s
experiments is that their photographs are often easier to come by
than images of a patient’s own relatives. But as part of her
dissertation project on biographical memory, the UCLA graduate
student Viskontas has for several years been showing patients
photographs of family members. Viskontas is reluctant to reveal
details about her results, which have not been published yet. But she
confirms that she has found neurons that respond to a particular
relative: father, mother, brother, sister, grandfather, and, yes,
grandmother. The experiments have also found cells that light up
when a patient sees an image of himself. Call them narcissism
cells.
Viskontas is wary of over-interpreting these results or
others emerging from the UCLA program. She does not believe, for
example, that they support the most extreme version of the
grandmother-cell hypothesis, in which cells are exclusively and
permanently assigned to one person, place, or thing. The past few
decades, she adds, have revealed that brain cells are versatile, or
"plastic," changing their roles in response to new
experiences. The UCLA experiments may not be detecting long-term
memory but so-called working memory, in which cells are
temporarily assigned to the job of representing Grandma, Jennifer
Aniston, or Rocky only as a result of the stimulation provided by the
experiment.
Koch isn’t so sure. It would make sense, he
argues, for our brains to dedicate some cells to people or other
things frequently in our thoughts. The larger significance of the
UCLA experiments, he says, is that neuroscientists may have to change
their view of neurons as simple switches, transistors, or pixels.
Each neuron may be more like a sophisticated
computer, running customized software. After all,
individual neurons can receive input from hundreds of thousands of
other cells, some of which inhibit rather than encourage the neuron’s
firing. The neuron may in turn encourage or suppress firing by some
of those same cells in complex positive or negative feedback
loops.
What excites Koch most about the thinking-cell results
is the possibility that they may illuminate a fundamental component
of cognition. Our comprehension of the world, he says, requires that
we ignore much of the data flooding in through our senses. When we
turn on a TV or reminisce about a movie, our brains somehow instantly
compress raw sensory data into meaningful concepts and categories.
This feat may be accomplished at least in part, Koch says, by cells
that represent not just this or that particular image of Rocky but
"the platonic ideal of Rocky."
Quiroga notes that a
short story by a fellow Argentinian, Jorge Luis Borges, spelled out
what would happen to us if we lacked this capacity for compression.
Funes the Memorious tells the tale of a youth who, after falling from
a horse and striking his head, becomes gifted, or cursed, with
photographic recall of every minute experience. He is so overwhelmed
by the infinitude of his perceptions that he retreats into a darkened
room. "To think is to forget a difference, to generalize, to
abstract," Borges writes. "In the overly replete world of
Funes there were nothing but details." Unlike Danny, Funes had
lost the capacity to perceive the platonic ideal of Rocky.
In
Danny’s hospital room, weighty philosophical issues yield to
more practical concerns, like getting a tray on rollers properly
positioned over his lap. "I’m not an engineer, just a
scientist," Quiroga says apologetically as he struggles with the
balky tray. He eventually succeeds with the help of Emily Ho, who is
an engineer and the team’s chief troubleshooter.
As
other researchers come and go, Ho remains in Danny’s room,
manning the amplifiers, computers, and other equipment. When the
readouts from Danny’s microwires go haywire, Ho starts checking
lights and other appliances that might be causing electrical
interference. Within minutes she traces the problem to the
remote-controller that Danny uses to make his bed go up and down.
After she unplugs it, the readouts return to normal.
The
atmosphere in the room is surprisingly cheery. One reason is the
frequent presence of Danny’s father, Bill, owner of a carpeting
business. Although silence reigns during experiments, so that Danny
doesn’t get distracted, between sessions Bill teases both the
researchers and his son. At one point, Ho, watching signals from
Danny’s neurons scroll across a computer screen, tells him that
he’s got "great brain cells."
"Are you
kidding?" Bill exclaims. "He’s got lousy brain
cells!"
Danny grins, even more so later after his father
fumbles a styrofoam container of Chinese food, sending chicken chunks
skidding across the floor. "Who’s got the lousy cells?"
Danny chortles.
Bill turns serious when asked why he and his
wife agreed to let their son participate in these studies. "It’s
a duty," Bill says. Danny, Bill points out, has benefited
because many other patients before him have volunteered to be
subjects for research. In the future, people suffering from epilepsy
or other brain disorders may benefit from what the UCLA team learns
from Danny.
For his part, Danny says he enjoys hanging out
with the scientists and doing the experiments--"as long as
there’s no math."
Brain cells more complex than we think
By Steve Connor
London - It only takes one brain cell to recognise a Hollywood
celebrity, according to a study into how the human mind recalls a
familiar face.
Scientists have shown that the faces of stars
such as Halle Berry, Jennifer Aniston and Brad Pitt can each
stimulate a nerve cell in the brain which seems to recognise that
face alone.
Neuroscientists said the results suggest that
present-day thinking about how the brain recognises and remembers
familiar objects and people may have to be overhauled.
The
findings suggest that individual brain cells are more complex than
previously thought and rather than being mere electronic relays for
transmitting signals, they are miniature computers in their own
right, capable of processing complex information.
Instead
of brain cells acting as a network of individual units which are not
particularly important on their own, scientists may revitalise an
older theory suggesting that a separate nerve cell is responsible for
triggering the recognition of a familiar face.
The scientists
who carried out the research said that along with the old idea of a
"grandmother cell" responsible for recognising your
grandmother, there may be an entire population of brain cells for
recognising other people, such as the Halle Berry or Brad Pitt cell.
Itzhak Fried, professor of neurosurgery at the University of
California in Los Angeles, said that the discovery, published in the
journal Nature on Friday, could lead to new ways of augmenting a
damaged brain or a failing memory with artificial aids.
"This
new understanding of individual neurons as thinking cells is an
important step toward cracking the brain's cognition code,"
Fried said.
"As our understanding grows, we one day may
be able to build cognitive prostheses to replace functions lost due
to brain injury or disease, perhaps even for memory," he said.
The study was carried out on eight epilepsy patients who were
undergoing treatment with the help of micro-electrodes implanted deep
into their brains. The scientists used the patients' clinical
situation to test their ideas about recognition.
As the
patients were shown images of famous people and landmarks, the
scientists recorded electrical activity in individual brain cells or
"units".
"For
example, in one case, a unit responded only to three completely
different images of the ex-president Bill Clinton.
"Another
unit from a different patient responded only to images of The
Beatles, another one to cartoons from The Simpsons television series
and another one to pictures of the basketball player Michael Jordan,"
the scientists said.
In another patient a single brain cell
responded to different pictures of Jennifer Aniston but not to other
famous or non-famous faces, landmarks or animals.
Yet the
same brain cell did not respond to Jennifer Aniston when she appeared
together with her former husband Brad Pitt.
In another
instance, an individual brain cell responded to a picture of Halle
Berry, even with many of her features obscured when she was dressed
as Catwoman. The same cell even responded to the words "Halle
Berry".
Christof Koch, professor of computation and
neural systems at the California Institute of Technology, said the
results of the study were extremely surprising.
"Our
findings fly in the face of conventional thinking about how brain
cells function. Conventional wisdom views individual brain cells as
simple switches or relays.
"In fact, we are finding that
neurons are able to function more like a sophisticated computer,"
Koch said. - The Independent
This article was originally published on page 10 of The Cape Times on June 24, 2005
Posted on Wed, Jun. 22, 2005
NEW YORK - Halle Berry? Jennifer Aniston? Everybody knows them. And now a surprising study finds that even individual cells in your brain act as if they recognize them.
The work could help shed light on how the brain stores information, an expert said.
When scientists sampled brain cell activity in people who were scrutinizing dozens of pictures, they found some individual cells that reacted to a particular celebrity, landmark, animal or object. In one case, a single cell was activated by different photos of Berry, including some in her "Catwoman" costume, a drawing of her and even the words, "Halle Berry."
The findings appear in a part of the brain that transforms what people perceive into what they'll eventually remember, said Dr. Itzhak Fried of the University of California, Los Angeles, a senior investigator on the project.
The findings do not mean that a particular person or object is recognized and remembered by only one brain cell, Fried said. "There is not only one cell that codes for Jennifer Aniston. That would be impossible," Fried said.
Nor do they mean that a given brain cell will react to only one person or object, he said, because the study participants were tested with only a relatively limited number of pictures. In fact, some cells were found to respond to more than one person, or to a person and an object.
What the study does suggest, Fried and colleagues say in Thursday's issue of the journal Nature, is that the brain appears to use relatively few cells to record something it sees. That's in contrast to the idea that it uses a huge network of brain cells instead.
It's surprising that an individual neuron would react so specifically to a given person, said the study's other senior investigator, Christof Koch of the California Institute of Technology. "It's much more specific than people used to think."
Charles Connor, who studies how the brain processes visual information but who didn't participate in the new study, called the results striking. Nobody would have predicted that conceptual information relating to Aniston, for example, would be signaled so clearly by single cells, said Connor, who works at Johns Hopkins University.
The "really dramatic finding," he said, is that a single brain cell can respond so consistently to completely different pictures of a given person. "That will surprise everybody," Connor said.
The part of the brain the researchers studied draws heavily on memory as well as signals from what the eye sees, so the result may illustrate how memory is represented in the brain and how it relates to visual signals, he said. He noted that in one participant, one brain cell responded both to Aniston and to Lisa Kudrow, her co-star on the TV hit "Friends."
"That's a tantalizing glimpse at how neurons represent concepts like membership in the cast of 'Friends,' and could lead to much more extensive studies of how conceptual information is organized in human memory," he said.
The researchers tested eight people with epilepsy who'd had electrodes placed in their brains so that doctors could track down the origins of their seizures. The electrodes monitored the activity of a small fraction of cells in a part of the brain called the medial temporal lobe.
The researchers kept track of which cells became activated as the participants looked at images of people, landmarks and objects on a laptop computer. One participant had a brain cell that reacted to different pictures of Aniston, for example, but was not strongly stimulated by other famous or non-famous faces.
Oddly, when that participant was shown photos of Aniston paired with actor Brad Pitt, from whom Aniston later separated, the brain cell didn't respond. "I don't know if it was a prophetic thing," Fried said.
ON THE NET: www.nature.com